Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe.
Authors
Abstract
Bilinguals derive the same semantic concepts from equivalent, but acoustically different, words in their first and second languages. The neural mechanisms underlying the representation of language-independent concepts in the brain remain unclear. Here, we measured fMRI in human bilingual listeners and reveal that response patterns to individual spoken nouns in one language (e.g., "horse" in English) accurately predict the response patterns to equivalent nouns in the other language (e.g., "paard" in Dutch). Stimuli were four monosyllabic words in both languages, all from the category of "animal" nouns. For each word, pronunciations from three different speakers were included, allowing the investigation of speaker-independent representations of individual words. We used multivariate classifiers and a searchlight method to map the informative fMRI response patterns that enable decoding of spoken words within a language (within-language discrimination) and across languages (across-language generalization). Response patterns discriminative of spoken words within a language were distributed across multiple cortical regions, reflecting the complexity of the neural networks recruited during speech and language processing. Response patterns discriminative of spoken words across languages were limited to localized clusters in the left anterior temporal lobe, the left angular gyrus and the posterior bank of the left postcentral gyrus, the right posterior superior temporal sulcus/superior temporal gyrus, the right medial anterior temporal lobe, the right anterior insula, and bilateral occipital cortex. These results corroborate the existence of "hub" regions organizing semantic-conceptual knowledge in abstract form at the fine-grained level of within-semantic-category discriminations.
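The core across-language generalization logic described above (train a classifier on response patterns to words in one language, test it on patterns evoked by the equivalent words in the other language) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the simulated "voxel patterns", the logistic-regression classifier, and all variable names are assumptions for demonstration; the actual study used searchlight MVPA on real fMRI responses.

```python
# Minimal sketch of across-language generalization decoding.
# Synthetic data stand in for fMRI patterns: each of 4 words has a latent
# "semantic" voxel pattern shared across languages, plus per-trial noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_words, n_trials, n_voxels = 4, 30, 50

# One latent semantic pattern per word, shared across the two languages.
semantic = rng.normal(0.0, 1.0, (n_words, n_voxels))

def simulate_language():
    """Simulate trial-wise patterns: shared semantic signal + trial noise."""
    X, y = [], []
    for w in range(n_words):
        trials = semantic[w] + rng.normal(0.0, 1.5, (n_trials, n_voxels))
        X.append(trials)
        y.extend([w] * n_trials)
    return np.vstack(X), np.array(y)

X_l1, y_l1 = simulate_language()  # e.g. English trials
X_l2, y_l2 = simulate_language()  # e.g. Dutch trials (same latent semantics)

# Train on language 1, test on language 2: above-chance accuracy indicates
# a language-independent (here, semantic) component in the patterns.
clf = LogisticRegression(max_iter=1000).fit(X_l1, y_l1)
acc = accuracy_score(y_l2, clf.predict(X_l2))
print(f"across-language accuracy: {acc:.2f} (chance = {1 / n_words:.2f})")
```

In the searchlight variant, this train/test step is repeated at every brain location over the voxels in a small local sphere, mapping where generalization succeeds.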
Similar articles
EEG decoding of spoken words in bilingual listeners: from words to language invariant semantic-conceptual representations
Spoken word recognition and production require fast transformations between acoustic, phonological, and conceptual neural representations. Bilinguals perform these transformations in native and non-native languages, deriving unified semantic concepts from equivalent, but acoustically different words. Here we exploit this capacity of bilinguals to investigate input invariant semantic representat...
Modality independence of word comprehension.
Functional magnetic resonance imaging (fMRI) was used to examine the functional anatomy of word comprehension in the auditory and visual modalities of presentation. We asked our subjects to determine if word pairs were semantically associated (e.g., table, chair) and compared this to a reference task where they were asked to judge whether word pairs rhymed (e.g., bank, tank). This comparison sh...
Fractionating the anterior temporal lobe: MVPA reveals differential responses to input and conceptual modality
Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge but thes...
Comparison of episodic and semantic memory in patients with and without temporal lobe epilepsy
Patients with epilepsy are at risk of cognitive disorders and behavioral abnormalities. The purpose of this study is to compare episodic memory and semantic memory in patients with temporal lobe epilepsy and healthy control subjects. This is a causal-comparative study. The study population was patients with epilepsy from the Iranian Epilepsy Society in Tehran; among 20 patients wi...
Modality-independent decoding of semantic information from the human brain.
An ability to decode semantic information from fMRI spatial patterns has been demonstrated in previous studies mostly for 1 specific input modality. In this study, we aimed to decode semantic category independent of the modality in which an object was presented. Using a searchlight method, we were able to predict the stimulus category from the data while participants performed a semantic catego...
Journal:
- The Journal of Neuroscience: the official journal of the Society for Neuroscience
Volume 34, Issue 1
Pages: -
Published: 2014